Supplementary Materials

This website serves as the supplementary materials for the paper titled “Hierarchical Point Process Models for Recurring Safety Critical Events Involving Commercial Truck Drivers: A Reliability Framework for Human Performance Modeling”, published in the Journal of Quality Technology.

Two models are proposed in this paper:

  1. A Bayesian hierarchical non-homogeneous Poisson process (NHPP) with a power law process (PLP) intensity function, and
  2. A Bayesian hierarchical jump power law process (JPLP).

The first section shows the R code to simulate data following the NHPP and JPLP data-generating processes. Only 10 drivers are assumed, to keep simulation and Bayesian estimation time manageable. The second section presents the Stan code that performs Bayesian hierarchical PLP and JPLP estimation. The third section demonstrates the R code for running Hamiltonian Monte Carlo on the simulated data. The last section presents the R code that extracts summary statistics (posterior mean, 95% credible interval, Gelman-Rubin statistic \(\hat{R}\), and effective sample size ESS) from the posterior distributions.

Simulate data

Simulate NHPP data with PLP intensity function

pacman::p_load(dplyr, rstan, broom)
source('Functions/NHPP_functions.R')
set.seed(123)
df_NHPP = sim_hier_nhpp(D = 10, beta = 1.2)
str(df_NHPP$hier_dat)
List of 9
 $ N           : int 255
 $ K           : num 3
 $ S           : int 95
 $ D           : int 10
 $ id          : int [1:95] 1 1 1 1 1 1 1 1 1 1 ...
 $ tau         : num [1:95] 11.09 9.62 10.42 8.22 10.38 ...
 $ event_time  : num [1:255] 1.04 3.29 4.15 5.16 6.49 ...
 $ group_size  : int [1:95] 8 7 1 0 6 16 5 2 3 8 ...
 $ X_predictors:'data.frame':   95 obs. of  3 variables:
  ..$ x1: num [1:95] 0.6651 -0.0857 0.9146 2.0706 0.8546 ...
  ..$ x2: num [1:95] 1.9446 0.0545 1.0107 2.6988 1.0172 ...
  ..$ x3: int [1:95] 0 3 3 1 1 1 2 4 2 2 ...

The NHPP is estimated at the shift level. In the simulated data list passed to Stan, N is the total number of events, K the number of covariates, S the total number of shifts, D the number of drivers, id the driver index for each shift, tau the length of each shift, event_time the within-shift event times, group_size the number of events in each shift, and X_predictors the shift-level covariates.
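The full data-generating code lives in Functions/NHPP_functions.R. As a minimal sketch (not the packaged sim_hier_nhpp), event times for a single shift of length \(\tau\) under a PLP with intensity \(\lambda(t) = (\beta/\theta)(t/\theta)^{\beta - 1}\) can be drawn using the fact that, conditional on \(N(\tau) = n\), the event times are distributed as the order statistics of \(\tau U^{1/\beta}\) with \(U \sim \mathrm{Uniform}(0, 1)\):

```r
# Minimal sketch of PLP simulation for one shift (not the packaged function):
# intensity lambda(t) = (beta/theta) * (t/theta)^(beta - 1)
sim_plp_shift <- function(tau, beta, theta) {
  # N(tau) is Poisson with mean Lambda(tau) = (tau/theta)^beta
  n <- rpois(1, (tau / theta)^beta)
  # Given N(tau) = n, event times are order statistics of tau * U^(1/beta)
  sort(tau * runif(n)^(1 / beta))
}

set.seed(123)
sim_plp_shift(tau = 11.09, beta = 1.2, theta = 5)
```

The hierarchical simulation repeats this across shifts and drivers, with driver-specific \(\theta\) linked to the covariates; the parameter values here are illustrative only.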

Simulate JPLP data

source('Functions/JPLP_functions.R')
set.seed(123)
df_JPLP = sim_hier_JPLP(D = 10, beta = 1.2)
str(df_JPLP$stan_dt)
List of 11
 $ N           : int 162
 $ K           : num 3
 $ S           : int 331
 $ D           : num 10
 $ id          : int [1:331] 1 1 1 1 1 1 1 1 1 1 ...
 $ r_trip      : int [1:331] 1 2 3 4 5 1 2 3 4 1 ...
 $ t_trip_start: num [1:331] 0 1.66 4.71 6.44 9.16 0 2.94 5.29 6.79 0 ...
 $ t_trip_end  : num [1:331] 1.66 4.71 6.44 9.16 11.09 ...
 $ event_time  : num [1:162] 2.71 4.57 5.7 6.27 6.66 ...
 $ group_size  : int [1:331] 0 2 2 1 0 1 1 1 0 1 ...
 $ X_predictors: num [1:331, 1:3] 0.665 0.665 0.665 0.665 0.665 ...
  ..- attr(*, "dimnames")=List of 2
  .. ..$ : chr [1:331] "1" "1.1" "1.2" "1.3" ...
  .. ..$ : chr [1:3] "x1" "x2" "x3"

In contrast to the NHPP, the JPLP is estimated at the trip level. In the simulated data list passed to Stan, the rows are organized by trip: S is the total number of trips, r_trip is the trip index within each shift, and t_trip_start and t_trip_end are each trip's start and end times.

Note that trip start and end times are measured from the beginning of the shift, with rest time excluded from the calculation.
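For illustration, with hypothetical clock-time trip records for one shift, the within-shift driving-time coordinates are obtained by subtracting the rest accumulated before each trip, so that on this scale each trip starts exactly where the previous one ended:

```r
# Hypothetical clock times (hours since shift start) for three trips
trips <- data.frame(clock_start = c(0, 2.00, 5.50),
                    clock_end   = c(1.66, 5.05, 7.23))
# Rest before trip i is the gap between trip i-1's end and trip i's start
rest     <- c(0, trips$clock_start[-1] - trips$clock_end[-nrow(trips)])
cum_rest <- cumsum(rest)
# Subtract cumulative rest to get the driving-time coordinates
t_trip_start <- trips$clock_start - cum_rest
t_trip_end   <- trips$clock_end   - cum_rest
```

After this transformation t_trip_end[i] equals t_trip_start[i + 1], matching the pattern visible in t_trip_start and t_trip_end above.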

Stan code

The Stan code for both models uses user-defined likelihood functions; these likelihood functions are derived in the manuscript.
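The exact functions are in the Stan files below. As a sketch of what such a user-defined likelihood evaluates, the single-shift PLP log-likelihood for event times \(t_1 < \dots < t_n\) observed on \((0, \tau]\) is \(n\log\beta - n\beta\log\theta + (\beta - 1)\sum_i \log t_i - (\tau/\theta)^\beta\), which in R reads:

```r
# Sketch of the single-shift PLP log-likelihood; the Stan code implements
# the same quantity as a user-defined function (and the JPLP adds the
# trip-level jumps), so this is illustrative, not the manuscript's code
loglik_plp <- function(t, tau, beta, theta) {
  n <- length(t)
  n * log(beta) - n * beta * log(theta) +
    (beta - 1) * sum(log(t)) - (tau / theta)^beta
}

loglik_plp(t = c(1.04, 3.29, 4.15), tau = 11.09, beta = 1.2, theta = 5)
```

The first three terms come from the product of intensities at the event times and the last term is the integrated intensity \(\Lambda(\tau) = (\tau/\theta)^\beta\).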

NHPP with PLP intensity function

stan_NHPP = 
[2,224-character Stan program string, omitted here]

JPLP

stan_JPLP = 
[2,898-character Stan program string, omitted here]

Bayesian estimation for simulated data

NHPP with PLP intensity function

fit_NHPP = stan("Stan/nhpp_plp_hierarchical.stan",
                chains = 4, iter = 5000, warmup = 1000, data = df_NHPP$hier_dat)

SAMPLING FOR MODEL 'nhpp_plp_hierarchical' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 0 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 5000 [  0%]  (Warmup)
Chain 1: Iteration:  500 / 5000 [ 10%]  (Warmup)
Chain 1: Iteration: 1000 / 5000 [ 20%]  (Warmup)
Chain 1: Iteration: 1001 / 5000 [ 20%]  (Sampling)
Chain 1: Iteration: 1500 / 5000 [ 30%]  (Sampling)
Chain 1: Iteration: 2000 / 5000 [ 40%]  (Sampling)
Chain 1: Iteration: 2500 / 5000 [ 50%]  (Sampling)
Chain 1: Iteration: 3000 / 5000 [ 60%]  (Sampling)
Chain 1: Iteration: 3500 / 5000 [ 70%]  (Sampling)
Chain 1: Iteration: 4000 / 5000 [ 80%]  (Sampling)
Chain 1: Iteration: 4500 / 5000 [ 90%]  (Sampling)
Chain 1: Iteration: 5000 / 5000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 2.267 seconds (Warm-up)
Chain 1:                8.579 seconds (Sampling)
Chain 1:                10.846 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'nhpp_plp_hierarchical' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 0 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: Iteration:    1 / 5000 [  0%]  (Warmup)
Chain 2: Iteration:  500 / 5000 [ 10%]  (Warmup)
Chain 2: Iteration: 1000 / 5000 [ 20%]  (Warmup)
Chain 2: Iteration: 1001 / 5000 [ 20%]  (Sampling)
Chain 2: Iteration: 1500 / 5000 [ 30%]  (Sampling)
Chain 2: Iteration: 2000 / 5000 [ 40%]  (Sampling)
Chain 2: Iteration: 2500 / 5000 [ 50%]  (Sampling)
Chain 2: Iteration: 3000 / 5000 [ 60%]  (Sampling)
Chain 2: Iteration: 3500 / 5000 [ 70%]  (Sampling)
Chain 2: Iteration: 4000 / 5000 [ 80%]  (Sampling)
Chain 2: Iteration: 4500 / 5000 [ 90%]  (Sampling)
Chain 2: Iteration: 5000 / 5000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 2.035 seconds (Warm-up)
Chain 2:                8.116 seconds (Sampling)
Chain 2:                10.151 seconds (Total)
Chain 2: 

SAMPLING FOR MODEL 'nhpp_plp_hierarchical' NOW (CHAIN 3).
Chain 3: 
Chain 3: Gradient evaluation took 0 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3: 
Chain 3: 
Chain 3: Iteration:    1 / 5000 [  0%]  (Warmup)
Chain 3: Iteration:  500 / 5000 [ 10%]  (Warmup)
Chain 3: Iteration: 1000 / 5000 [ 20%]  (Warmup)
Chain 3: Iteration: 1001 / 5000 [ 20%]  (Sampling)
Chain 3: Iteration: 1500 / 5000 [ 30%]  (Sampling)
Chain 3: Iteration: 2000 / 5000 [ 40%]  (Sampling)
Chain 3: Iteration: 2500 / 5000 [ 50%]  (Sampling)
Chain 3: Iteration: 3000 / 5000 [ 60%]  (Sampling)
Chain 3: Iteration: 3500 / 5000 [ 70%]  (Sampling)
Chain 3: Iteration: 4000 / 5000 [ 80%]  (Sampling)
Chain 3: Iteration: 4500 / 5000 [ 90%]  (Sampling)
Chain 3: Iteration: 5000 / 5000 [100%]  (Sampling)
Chain 3: 
Chain 3:  Elapsed Time: 2.371 seconds (Warm-up)
Chain 3:                7.987 seconds (Sampling)
Chain 3:                10.358 seconds (Total)
Chain 3: 

SAMPLING FOR MODEL 'nhpp_plp_hierarchical' NOW (CHAIN 4).
Chain 4: 
Chain 4: Gradient evaluation took 0 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4: 
Chain 4: 
Chain 4: Iteration:    1 / 5000 [  0%]  (Warmup)
Chain 4: Iteration:  500 / 5000 [ 10%]  (Warmup)
Chain 4: Iteration: 1000 / 5000 [ 20%]  (Warmup)
Chain 4: Iteration: 1001 / 5000 [ 20%]  (Sampling)
Chain 4: Iteration: 1500 / 5000 [ 30%]  (Sampling)
Chain 4: Iteration: 2000 / 5000 [ 40%]  (Sampling)
Chain 4: Iteration: 2500 / 5000 [ 50%]  (Sampling)
Chain 4: Iteration: 3000 / 5000 [ 60%]  (Sampling)
Chain 4: Iteration: 3500 / 5000 [ 70%]  (Sampling)
Chain 4: Iteration: 4000 / 5000 [ 80%]  (Sampling)
Chain 4: Iteration: 4500 / 5000 [ 90%]  (Sampling)
Chain 4: Iteration: 5000 / 5000 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 2.238 seconds (Warm-up)
Chain 4:                8.762 seconds (Sampling)
Chain 4:                11 seconds (Total)
Chain 4: 

JPLP

fit_JPLP = stan("Stan/jplp_hierarchical.stan",
                chains = 4, iter = 5000, warmup = 1000, data = df_JPLP$stan_dt)

SAMPLING FOR MODEL 'jplp_hierarchical' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 0.001 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 10 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 5000 [  0%]  (Warmup)
Chain 1: Iteration:  500 / 5000 [ 10%]  (Warmup)
Chain 1: Iteration: 1000 / 5000 [ 20%]  (Warmup)
Chain 1: Iteration: 1001 / 5000 [ 20%]  (Sampling)
Chain 1: Iteration: 1500 / 5000 [ 30%]  (Sampling)
Chain 1: Iteration: 2000 / 5000 [ 40%]  (Sampling)
Chain 1: Iteration: 2500 / 5000 [ 50%]  (Sampling)
Chain 1: Iteration: 3000 / 5000 [ 60%]  (Sampling)
Chain 1: Iteration: 3500 / 5000 [ 70%]  (Sampling)
Chain 1: Iteration: 4000 / 5000 [ 80%]  (Sampling)
Chain 1: Iteration: 4500 / 5000 [ 90%]  (Sampling)
Chain 1: Iteration: 5000 / 5000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 10.023 seconds (Warm-up)
Chain 1:                35.274 seconds (Sampling)
Chain 1:                45.297 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'jplp_hierarchical' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 0.001 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 10 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: Iteration:    1 / 5000 [  0%]  (Warmup)
Chain 2: Iteration:  500 / 5000 [ 10%]  (Warmup)
Chain 2: Iteration: 1000 / 5000 [ 20%]  (Warmup)
Chain 2: Iteration: 1001 / 5000 [ 20%]  (Sampling)
Chain 2: Iteration: 1500 / 5000 [ 30%]  (Sampling)
Chain 2: Iteration: 2000 / 5000 [ 40%]  (Sampling)
Chain 2: Iteration: 2500 / 5000 [ 50%]  (Sampling)
Chain 2: Iteration: 3000 / 5000 [ 60%]  (Sampling)
Chain 2: Iteration: 3500 / 5000 [ 70%]  (Sampling)
Chain 2: Iteration: 4000 / 5000 [ 80%]  (Sampling)
Chain 2: Iteration: 4500 / 5000 [ 90%]  (Sampling)
Chain 2: Iteration: 5000 / 5000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 8.626 seconds (Warm-up)
Chain 2:                33.912 seconds (Sampling)
Chain 2:                42.538 seconds (Total)
Chain 2: 

SAMPLING FOR MODEL 'jplp_hierarchical' NOW (CHAIN 3).
Chain 3: 
Chain 3: Gradient evaluation took 0.001 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 10 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3: 
Chain 3: 
Chain 3: Iteration:    1 / 5000 [  0%]  (Warmup)
Chain 3: Iteration:  500 / 5000 [ 10%]  (Warmup)
Chain 3: Iteration: 1000 / 5000 [ 20%]  (Warmup)
Chain 3: Iteration: 1001 / 5000 [ 20%]  (Sampling)
Chain 3: Iteration: 1500 / 5000 [ 30%]  (Sampling)
Chain 3: Iteration: 2000 / 5000 [ 40%]  (Sampling)
Chain 3: Iteration: 2500 / 5000 [ 50%]  (Sampling)
Chain 3: Iteration: 3000 / 5000 [ 60%]  (Sampling)
Chain 3: Iteration: 3500 / 5000 [ 70%]  (Sampling)
Chain 3: Iteration: 4000 / 5000 [ 80%]  (Sampling)
Chain 3: Iteration: 4500 / 5000 [ 90%]  (Sampling)
Chain 3: Iteration: 5000 / 5000 [100%]  (Sampling)
Chain 3: 
Chain 3:  Elapsed Time: 9.224 seconds (Warm-up)
Chain 3:                32.937 seconds (Sampling)
Chain 3:                42.161 seconds (Total)
Chain 3: 

SAMPLING FOR MODEL 'jplp_hierarchical' NOW (CHAIN 4).
Chain 4: 
Chain 4: Gradient evaluation took 0.001 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 10 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4: 
Chain 4: 
Chain 4: Iteration:    1 / 5000 [  0%]  (Warmup)
Chain 4: Iteration:  500 / 5000 [ 10%]  (Warmup)
Chain 4: Iteration: 1000 / 5000 [ 20%]  (Warmup)
Chain 4: Iteration: 1001 / 5000 [ 20%]  (Sampling)
Chain 4: Iteration: 1500 / 5000 [ 30%]  (Sampling)
Chain 4: Iteration: 2000 / 5000 [ 40%]  (Sampling)
Chain 4: Iteration: 2500 / 5000 [ 50%]  (Sampling)
Chain 4: Iteration: 3000 / 5000 [ 60%]  (Sampling)
Chain 4: Iteration: 3500 / 5000 [ 70%]  (Sampling)
Chain 4: Iteration: 4000 / 5000 [ 80%]  (Sampling)
Chain 4: Iteration: 4500 / 5000 [ 90%]  (Sampling)
Chain 4: Iteration: 5000 / 5000 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 9.612 seconds (Warm-up)
Chain 4:                31.921 seconds (Sampling)
Chain 4:                41.533 seconds (Total)
Chain 4: 

Posterior summary and diagnostic statistics

NHPP with PLP intensity function

Posterior mean, 95% credible interval, ESS, and \(\hat{R}\)

est_NHPP = broom.mixed::tidy(fit_NHPP, conf.int = TRUE, rhat = TRUE, ess = TRUE)
est_NHPP
# A tibble: 27 x 7
   term    estimate std.error conf.low conf.high  rhat   ess
   <chr>      <dbl>     <dbl>    <dbl>     <dbl> <dbl> <int>
 1 mu0        2.01     0.207     1.60      2.43   1.00 12477
 2 sigma0     0.527    0.183     0.299     1.01   1.00  9779
 3 beta       1.26     0.0785    1.12      1.42   1.00  9832
 4 R1_K[1]    0.933    0.0929    0.764     1.13   1.00  9549
 5 R1_K[2]    0.271    0.0706    0.139     0.415  1.00 16499
 6 R1_K[3]    0.211    0.0504    0.116     0.313  1.00 14545
 7 R0[1]      1.64     0.128     1.38      1.88   1.00 12865
 8 R0[2]      1.69     0.150     1.40      1.99   1.00 14187
 9 R0[3]      2.87     0.301     2.37      3.55   1.00 13207
10 R0[4]      1.94     0.181     1.60      2.31   1.00 20213
# ... with 17 more rows
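These quantities can also be computed directly from the posterior draws, e.g. via draws = rstan::extract(fit_NHPP)$beta: the posterior mean is the mean of the draws and the 95% credible interval is their 2.5% and 97.5% quantiles. A self-contained sketch using stand-in normal draws (16,000 = 4 chains × 4,000 post-warmup iterations) in place of the fitted object:

```r
# Stand-in draws in place of rstan::extract(fit_NHPP)$beta
set.seed(1)
draws <- rnorm(16000, mean = 1.26, sd = 0.08)

# Posterior mean and equal-tailed 95% credible interval
c(mean = mean(draws), quantile(draws, probs = c(0.025, 0.975)))
```

broom.mixed::tidy performs the same computation (plus \(\hat{R}\) and ESS diagnostics) for every parameter at once.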

Trace plots for selected parameters

t_NHPP = stan_trace(fit_NHPP, pars = c('mu0', 'sigma0', 'beta'), ncol = 1)
t_NHPP

JPLP

Posterior mean, 95% credible interval, ESS, and \(\hat{R}\)

est_JPLP = broom.mixed::tidy(fit_JPLP, conf.int = TRUE, rhat = TRUE, ess = TRUE)
est_JPLP
# A tibble: 28 x 7
   term    estimate std.error conf.low conf.high  rhat   ess
   <chr>      <dbl>     <dbl>    <dbl>     <dbl> <dbl> <int>
 1 mu0        2.11     0.222    1.68       2.57   1.00  6575
 2 sigma0     0.506    0.183    0.266      0.987  1.00  6908
 3 beta       1.18     0.119    0.969      1.43   1.00  5024
 4 kappa      0.800    0.0787   0.651      0.955  1.00  4704
 5 R1_K[1]    0.995    0.136    0.762      1.29   1.00  4210
 6 R1_K[2]    0.216    0.0915   0.0507     0.411  1.00  9781
 7 R1_K[3]    0.235    0.0693   0.110      0.381  1.00  7657
 8 R0[1]      1.77     0.187    1.42       2.16   1.00  6752
 9 R0[2]      1.99     0.220    1.60       2.46   1.00  6687
10 R0[3]      2.55     0.299    2.04       3.22   1.00  6678
# ... with 18 more rows

Trace plots for selected parameters

t_JPLP = stan_trace(fit_JPLP, pars = c('mu0', 'sigma0', 'beta', 'kappa'), ncol = 1)
t_JPLP

Supplementary trace plots for selected parameters in the manuscript

The trace plots below are generated from data on 496 drivers of large commercial trucks, which are presented as a case study in the main manuscript.

knitr::include_graphics("Figures/Aim3_trace_plot.jpeg")

Session Information

R version 4.0.5 (2021-03-31)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 19042)

Matrix products: default

locale:
[1] LC_COLLATE=Chinese (Simplified)_China.936 
[2] LC_CTYPE=Chinese (Simplified)_China.936   
[3] LC_MONETARY=Chinese (Simplified)_China.936
[4] LC_NUMERIC=C                              
[5] LC_TIME=Chinese (Simplified)_China.936    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods  
[7] base     

other attached packages:
[1] broom_0.7.5          rstan_2.21.2         StanHeaders_2.21.0-7
[4] ggplot2_3.3.3        dplyr_1.0.5         

loaded via a namespace (and not attached):
 [1] Rcpp_1.0.6         lattice_0.20-41    tidyr_1.1.3       
 [4] prettyunits_1.1.1  ps_1.6.0           digest_0.6.27     
 [7] utf8_1.2.1         V8_3.4.2           R6_2.5.0          
[10] plyr_1.8.6         backports_1.2.1    stats4_4.0.5      
[13] evaluate_0.14      coda_0.19-4        highr_0.8         
[16] pillar_1.5.1       rlang_0.4.10       curl_4.3          
[19] rstudioapi_0.13    callr_3.6.0        jquerylib_0.1.4   
[22] Matrix_1.3-2       rmarkdown_2.7      labeling_0.4.2    
[25] stringr_1.4.0      TMB_1.7.19         loo_2.4.1         
[28] munsell_0.5.0      compiler_4.0.5     xfun_0.22         
[31] pkgconfig_2.0.3    pkgbuild_1.2.0     htmltools_0.5.1.1 
[34] broom.mixed_0.2.6  downlit_0.2.1      tidyselect_1.1.0  
[37] tibble_3.1.0       gridExtra_2.3      codetools_0.2-18  
[40] matrixStats_0.58.0 fansi_0.4.2        crayon_1.4.1      
[43] withr_2.4.1        grid_4.0.5         nlme_3.1-152      
[46] jsonlite_1.7.2     gtable_0.3.0       lifecycle_1.0.0   
[49] pacman_0.5.1       magrittr_2.0.1     scales_1.1.1      
[52] RcppParallel_5.1.4 cli_2.4.0          stringi_1.5.3     
[55] farver_2.1.0       reshape2_1.4.4     bslib_0.2.5.1     
[58] ellipsis_0.3.1     generics_0.1.0     vctrs_0.3.7       
[61] distill_1.2        tools_4.0.5        glue_1.4.2        
[64] purrr_0.3.4        processx_3.5.1     parallel_4.0.5    
[67] yaml_2.2.1         inline_0.3.18      colorspace_2.0-0  
[70] knitr_1.31         sass_0.4.0